Online social networks have stimulated communications over the Internet more than ever, making it possible to transmit secret messages over such noisy channels. In this paper, we propose a coverless image steganography network, named CIS-Net, which synthesizes high-quality images conditioned directly on the secret message to be transmitted. CIS-Net is composed of four modules, namely, the generation, adversarial, extraction, and noise modules. The receiver can extract the hidden message without any loss, even when the images have been distorted by JPEG compression attacks. To disguise the steganographic behavior, we collected images in the context of profile photos and stickers and trained our network accordingly. As such, the generated images are more likely to escape malicious detection and attack. Compared with previous image steganography methods, the main distinctions are the robustness and losslessness against diverse attacks. Experiments on various public datasets have demonstrated superior anti-steganalysis capability.
Videos are prone to tampering attacks that alter their meaning and deceive viewers. Previous video forgery detection schemes look for tiny clues to locate tampered areas. However, attackers can successfully evade such supervision by destroying these clues through video compression or blurring. This paper proposes a video watermarking network for tampering localization. We jointly train a 3D-UNet-based watermark embedding network and a decoder that predicts the tampering mask. The perturbation introduced by watermark embedding is nearly imperceptible. Considering that no off-the-shelf differentiable video codec simulator exists, we propose to mimic video compression by combining the simulation results of other typical attacks, such as JPEG compression and blurring, as an approximation. Experimental results show that our method generates watermarked videos with good imperceptibility, and robustly and accurately locates the tampered areas in the attacked versions.
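The idea of approximating a missing video codec simulator by mixing simpler distortions can be illustrated with a toy forward simulation. This is a hedged sketch in plain numpy (non-differentiable here, whereas the paper trains against differentiable surrogates); all function names and the quantization step are illustrative assumptions, not from the paper.

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=2):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_frame(frame, sigma=1.0, radius=2):
    # separable Gaussian blur: rows first, then columns
    k = gaussian_kernel(sigma, radius)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, frame)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def jpeg_like_quantize(frame, step=16.0):
    # coarse stand-in for JPEG: uniform quantization of intensities
    return np.round(frame / step) * step

def simulate_codec(video, p_blur=0.5, step=16.0, seed=0):
    """Mimic video compression by randomly mixing simpler attacks per frame."""
    rng = np.random.default_rng(seed)
    frames = []
    for frame in video:
        f = blur_frame(frame) if rng.random() < p_blur else frame
        frames.append(jpeg_like_quantize(f, step))
    return np.stack(frames)
```

A watermark decoder trained against such a mixture sees frame-wise distortions that loosely resemble lossy coding, which is the approximation argument the abstract makes.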
The development of Internet technology has continuously amplified the spread and destructive power of rumors and fake news. Previous research on multimedia fake news detection has built a series of complex feature extraction and fusion networks to achieve feature alignment between images and text. However, what the multimodal features consist of, and how features from different modalities affect the decision process, remain open questions. We present AURA, a multimodal fake news detection network with adaptive unimodal representation aggregation. We first extract representations separately from the image pattern, image semantics, and text, and generate a multimodal representation by feeding the semantic and linguistic representations into an expert network. We then conduct coarse-level fake news detection and cross-modal consistency learning based on the unimodal and multimodal representations. The classification and consistency scores are mapped into modality-aware attention scores to re-weight the features. Finally, we aggregate and classify the weighted features for refined fake news detection. Comprehensive experiments on Weibo and Gossipcop show that AURA beats several state-of-the-art fake news detection schemes, with both the overall prediction accuracy and the recall of fake news steadily improved.
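The score-to-attention re-weighting step described above can be sketched as a softmax over per-modality scores followed by a weighted aggregation. The function name and shapes below are illustrative assumptions for a generic version of this operation, not AURA's actual architecture.

```python
import numpy as np

def aggregate_modalities(features, scores):
    """Turn per-modality scores into attention weights and aggregate.

    features: list of equal-length modality representations
    scores:   one scalar per modality (e.g., a classification or
              consistency confidence from a coarse detection stage)
    """
    feats = np.asarray(features, dtype=float)      # (M, D)
    s = np.asarray(scores, dtype=float)            # (M,)
    attn = np.exp(s - s.max())                     # stable softmax
    attn /= attn.sum()
    return (attn[:, None] * feats).sum(axis=0)     # (D,) weighted aggregate
```

With one modality scoring far higher than the others, the aggregate is dominated by that modality's features, which is the intended effect of modality-aware attention.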
Image cropping is a cheap and effective operation for maliciously altering image content. Existing cropping detection mechanisms analyze the fundamental traces of image cropping, such as chromatic aberration and vignetting, to uncover cropping attacks. However, they are fragile against common post-processing attacks, which deceive forensics by removing such cues. Moreover, they ignore the fact that recovering the cropped-out content can reveal the purpose of the attack. This paper proposes a novel robust watermarking scheme for image Cropping Localization and Recovery (CLR-Net). We first protect the original image by introducing an imperceptible perturbation. Then, typical image post-processing attacks are simulated to erode the protected image. On the recipient's side, we predict the cropping mask and recover the original image. We propose two plug-and-play networks to improve the real-world robustness of CLR-Net, namely, the Fine-Grained generative JPEG simulator (FG-JPEG) and a Siamese image pre-processing network. To the best of our knowledge, we are the first to address the combined challenge of image cropping localization and recovery of the entire image from a fragment. Experiments show that, despite the presence of various types of image processing attacks, CLR-Net can accurately localize the cropping and recover the details of the cropped-out regions with high quality and fidelity.
Deep learning has achieved great success in various industrial applications. Companies do not want their valuable data to be stolen by malicious employees to train pirated models, nor do they want their data to be analyzed by competitors after being used online. We propose a new solution for this scenario by robustly and reversibly transforming images into adversarial images. We develop a Reversible Adversarial Example Generator (RAEG) that introduces slight changes to images to fool traditional classification models. Even if malicious attackers train pirated models on defended versions of the protected images, RAEG can significantly weaken the functionality of these models. Meanwhile, the reversibility of RAEG ensures the performance of authorized models. Extensive experiments show that RAEG protects data against adversarial defenses better, and with slighter distortion, than previous methods.
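The protect/recover round trip at the heart of the reversible idea can be shown with a deliberately minimal sketch: a key-seeded sign perturbation that only the key holder can regenerate and subtract. This is a toy assumption for illustration only; RAEG itself learns an adversarial perturbation with a generator and embeds recovery information invertibly, which this sketch does not attempt.

```python
import numpy as np

def _pattern(shape, key, eps):
    # key-seeded +/- eps sign pattern, reproducible by the key holder
    rng = np.random.default_rng(key)
    return eps * rng.choice([-1.0, 1.0], size=shape)

def protect(image, key, eps=2.0):
    """Add a slight, key-dependent perturbation to the image."""
    return image + _pattern(image.shape, key, eps)

def recover(protected, key, eps=2.0):
    """Authorized recovery: regenerate the same pattern and subtract it."""
    return protected - _pattern(protected.shape, key, eps)
```

Unauthorized users see only the perturbed image; anyone holding the key recovers the original, mirroring the authorized-versus-pirated asymmetry the abstract describes.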
Decompilation aims to transform a low-level program language (LPL) (e.g., binary file) into its functionally-equivalent high-level program language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
Recently, deep learning has shown its advantages in representation learning and clustering for time series data. Despite the considerable progress, the existing deep time series clustering approaches mostly seek to train the deep neural network by some instance reconstruction based or cluster distribution based objective, which, however, lack the ability to exploit the sample-wise (or augmentation-wise) contrastive information or even the higher-level (e.g., cluster-level) contrastiveness for learning discriminative and clustering-friendly representations. In light of this, this paper presents a deep temporal contrastive clustering (DTCC) approach, which, for the first time to our knowledge, incorporates the contrastive learning paradigm into the deep time series clustering research. Specifically, with two parallel views generated from the original time series and their augmentations, we utilize two identical auto-encoders to learn the corresponding representations, and in the meantime perform the cluster distribution learning by incorporating a k-means objective. Further, two levels of contrastive learning are simultaneously enforced to capture the instance-level and cluster-level contrastive information, respectively. With the reconstruction loss of the auto-encoder, the cluster distribution loss, and the two levels of contrastive losses jointly optimized, the network architecture is trained in a self-supervised manner and the clustering result can thereby be obtained. Experiments on a variety of time series datasets demonstrate the superiority of our DTCC approach over the state-of-the-art.
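The instance-level contrastive term over two views can be illustrated with a standard NT-Xent loss in plain numpy, where row i of each view forms a positive pair. This is a generic sketch of the paradigm, not DTCC's exact formulation.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """Instance-level contrastive (NT-Xent) loss between two views.

    z1, z2: (N, D) representations of the original series and their
    augmentations; row i of z1 and row i of z2 are a positive pair.
    """
    z = np.concatenate([z1, z2], axis=0)                 # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # cosine similarities
    sim = z @ z.T / tau                                  # (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                       # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row against its positive column
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

A cluster-level counterpart applies the same construction to the columns of the soft cluster-assignment matrices of the two views.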
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, achieve great success in zero-shot text-guided 3D synthesis. However, due to the scratch training and random initialization without any prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the corresponding text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from input texts in the text-to-shape stage as the 3D shape prior. We then utilize it to initialize a neural radiance field and optimize it with the full prompt. For the text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely, Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
Adversarial patch is an important form of real-world adversarial attack that brings serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content. This reveals that the positions and perturbations are both important to the adversarial attack. For that, in this paper, we propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting. Technically, we regard the patch's position and the pre-designed hyper-parameters that determine the patch's perturbations as the variables, and utilize the reinforcement learning framework to simultaneously solve for the optimal solution based on the rewards obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency. Besides, experiments on the commercial FR service and physical environments confirm its practical application value. We also extend our method to the traffic sign recognition task to verify its generalization ability.
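The query-based search over patch variables can be illustrated with a bare-bones REINFORCE loop over discrete candidate positions against a black-box reward. The toy reward below merely stands in for the target model's output, and none of the names or settings come from the paper.

```python
import numpy as np

def reinforce_patch_position(reward_fn, n_positions, iters=500, lr=0.5, seed=0):
    """Search for a good patch position using only black-box queries.

    reward_fn(pos) -> scalar reward from the target model (no gradients).
    A softmax policy over candidate positions is updated with the
    score-function (REINFORCE) gradient and a running baseline.
    """
    rng = np.random.default_rng(seed)
    logits = np.zeros(n_positions)
    baseline = 0.0
    for _ in range(iters):
        p = np.exp(logits - logits.max())
        p /= p.sum()
        pos = rng.choice(n_positions, p=p)   # sample a position to try
        r = reward_fn(pos)                   # one query to the target
        baseline = 0.9 * baseline + 0.1 * r  # running baseline cuts variance
        grad = -p
        grad[pos] += 1.0                     # d log p(pos) / d logits
        logits += lr * (r - baseline) * grad
    return int(np.argmax(logits))

# toy black-box target: only position 7 yields a successful attack
best = reinforce_patch_position(lambda pos: 1.0 if pos == 7 else 0.0,
                                n_positions=16)
```

Extending the action to include perturbation hyper-parameters, as the paper does, amounts to enlarging this discrete action space or adding continuous policy heads.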
With the increase in health consciousness, noninvasive body monitoring has aroused interest among researchers. As one of the most important pieces of physiological information, the heart rate (HR) has been remotely estimated from facial videos in recent years. Although progress has been made over the past few years, there are still some limitations, such as processing time that increases with accuracy and the lack of comprehensive, challenging datasets for use and comparison. Recently, it was shown that HR information can be extracted from facial videos by spatial decomposition and temporal filtering. Inspired by this, a new framework is introduced in this paper to remotely estimate the HR under realistic conditions by combining spatial and temporal filtering and a convolutional neural network. Our proposed approach shows better performance compared with the benchmark on the MMSE-HR dataset in terms of both the average HR estimation and short-time HR estimation. High consistency in short-time HR estimation is observed between our method and the ground truth.
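The temporal-filtering idea behind remote HR estimation can be sketched as band-limited spectral peak picking on a per-frame facial-ROI intensity signal. This is a generic illustration under simplifying assumptions (clean synthetic pulse, fixed ROI), not the paper's CNN pipeline.

```python
import numpy as np

def estimate_hr(signal, fs, lo_bpm=42.0, hi_bpm=180.0):
    """Estimate heart rate from a temporal intensity signal.

    signal: mean green-channel intensity of a facial ROI per frame
    fs:     video frame rate (Hz). The spectrum is restricted to a
            plausible heart-rate band and the dominant frequency taken.
    """
    x = signal - signal.mean()                 # remove DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                         # beats per minute

# synthetic pulse at 72 bpm (1.2 Hz) sampled at 30 fps for 10 s
t = np.arange(0, 10, 1 / 30)
sig = (0.5 * np.sin(2 * np.pi * 1.2 * t)
       + 0.05 * np.random.default_rng(0).standard_normal(t.size))
```

Real pipelines add spatial decomposition and learned filtering on top of this band-pass intuition to cope with motion and illumination noise.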